Optimizing Sparse Representations for Dataflow Analysis
Abstract
Sparse program representations allow inter-statement dependences to be represented explicitly, enabling dataflow analyzers to restrict the propagation of information to paths where it could potentially affect the dataflow solution. This paper describes the use of a single sparse program representation, the value dependence graph, in both general and analysis-specific contexts, and demonstrates its utility in reducing the cost of dataflow analysis. We find that several semantics-preserving transformations are beneficial in both contexts.

1 Introduction

The goal of dataflow analysis is to compute assertions about certain program quantities at particular points in a program, such as "after this statement is executed, variable x always has the value 2" or "this procedure call will leave variable z unchanged." A typical dataflow analyzer will associate a set of assertions with each program point, treat each program statement as a mapping function over such assertion sets, and solve the resulting system of equations by iterative or structured means. The memory cost of such an analysis is thus proportional to the number of possible assertion sets and the number of points at which these sets must be stored, while the time cost is proportional to the number of assertion sets and the number of statements through which they must be propagated. Sparse program representations attempt to minimize both of these costs by more directly connecting producers of values with their consumers, so that fewer assertions are kept at each point and fewer need be propagated through each statement. General sparse representations such as static single assignment (SSA) form [CFR+89] completely describe a program's behavior, while analysis-specific representations such as sparse evaluation graphs (SEGs) [CCF91] represent only those program statements needed to solve a particular dataflow problem. This paper describes the use of a single sparse program
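The dense scheme described above can be made concrete with a small sketch. The program, CFG, and variable names below are hypothetical illustrations, not taken from the paper: a classic iterative reaching-definitions analysis in which every CFG node stores a full assertion set and every fact is pushed through every statement, which is exactly the per-point storage and per-statement propagation cost that sparse representations aim to reduce.

```python
# Dense iterative reaching-definitions analysis (a minimal sketch on a
# hypothetical 5-node CFG; assertions are (defining_node, variable) pairs).
# Node 2 is a branch; node 5 is the join where both paths meet.
succ = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}   # node -> successors
defs = {1: "x", 2: None, 3: "y", 4: "x", 5: None}   # node -> variable it defines

IN = {n: set() for n in succ}    # assertion set stored at each node entry
OUT = {n: set() for n in succ}   # assertion set stored at each node exit

changed = True
while changed:  # iterate to a fixed point
    changed = False
    for n in succ:
        # IN[n] = union of OUT over all predecessors of n
        new_in = set()
        for p in succ:
            if n in succ[p]:
                new_in |= OUT[p]
        # OUT[n] = gen(n) | (IN[n] - kill(n)): a definition of v kills
        # every earlier definition of the same variable v
        kill = {(m, v) for (m, v) in new_in if v == defs[n]}
        gen = {(n, defs[n])} if defs[n] else set()
        new_out = gen | (new_in - kill)
        if new_in != IN[n] or new_out != OUT[n]:
            IN[n], OUT[n], changed = new_in, new_out, True

# At the join node 5, the definitions of x from nodes 1 and 4 both reach:
print(sorted(OUT[5]))  # → [(1, 'x'), (3, 'y'), (4, 'x')]
```

Note that the facts about x and y are stored at, and propagated through, every node, including node 2, which touches neither variable. A sparse representation such as SSA form instead links each definition directly to its uses, so a fact about x travels only along x's own def-use edges and nothing is stored at unrelated points.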
Publication date: 1995